AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows

Neural Information Processing Systems


Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial attacks. In this paper, we introduce AdvFlow: a novel black-box adversarial attack method on image classifiers that exploits the power of normalizing flows to model the density of adversarial examples around a given target image. We show that the proposed method generates adversaries that closely follow the clean data distribution, a property that makes their detection less likely. Moreover, our experimental results show that the proposed approach performs competitively with existing attack methods on defended classifiers.
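The core idea described above can be illustrated with a toy sketch: a normalizing flow maps base noise z ~ N(0, I) to a perturbation near the clean image, and the perturbed image is what would be submitted to the black-box classifier. This is a minimal illustration under stated assumptions, not the authors' implementation; the single affine coupling layer, the dimensionality, and the `attack_step` helper are all hypothetical stand-ins for the paper's flow model and search procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # flattened image dimensionality for this toy example

# One affine coupling layer: split z in half and shift the second half
# conditioned on the first. The map is invertible, as a flow layer must be.
W = rng.normal(scale=0.1, size=(D // 2, D // 2))
b = rng.normal(scale=0.1, size=D // 2)

def flow_forward(z):
    """Push a base-distribution sample through the (toy) flow."""
    z1, z2 = z[: D // 2], z[D // 2:]
    shift = z1 @ W + b  # conditioner (here: a single linear map)
    return np.concatenate([z1, z2 + shift])

def attack_step(x_clean, epsilon=0.05):
    """Sample one candidate adversary near x_clean via the flow (hypothetical helper)."""
    z = rng.normal(size=D)                      # base noise z ~ N(0, I)
    delta = flow_forward(z)                     # flow output used as a perturbation
    delta = np.clip(delta, -epsilon, epsilon)   # keep the attack inconspicuous
    return np.clip(x_clean + delta, 0.0, 1.0)   # stay in the valid pixel range

x = rng.uniform(size=D)   # stand-in for a clean image
x_adv = attack_step(x)
print(np.max(np.abs(x_adv - x)))  # perturbation magnitude, bounded by epsilon
```

In the actual method, the flow would be trained so its outputs match the clean data distribution, and the latent z would be optimized (via repeated black-box queries) rather than sampled once.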